Bio-inspired learning has been gaining popularity recently given that Backpropagation (BP) is not considered biologically plausible. Many algorithms have been proposed in the literature that are more biologically plausible than BP. However, apart from overcoming the biological implausibility of BP, a strong motivation for using Bio-inspired algorithms has been lacking. In this study, we undertake a holistic comparison of BP against multiple Bio-inspired algorithms to answer the question of whether Bio-learning offers additional benefits over BP beyond biological plausibility. We test Bio-algorithms under different design choices, such as access to only partial training data, resource constraints on the number of training epochs, sparsification of the neural network parameters, and addition of noise to input samples. Through these experiments, we notably find two key advantages of Bio-algorithms over BP. Firstly, Bio-algorithms perform much better than BP when the entire training dataset is not supplied: four of the five Bio-algorithms tested outperform BP by up to 5% accuracy when only 20% of the training dataset is available. Secondly, even when the full dataset is available, Bio-algorithms learn much quicker and converge to a stable accuracy in far fewer training epochs than BP. Hebbian learning, specifically, is able to learn in just 5 epochs compared to around 100 epochs required by BP. These insights present practical reasons for utilising Bio-learning beyond its biological plausibility and also point towards interesting new directions for future work on Bio-learning.
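As a concrete illustration of the kind of Bio-algorithm referred to above, the following is a minimal sketch of an unsupervised Hebbian-style update (Oja's rule) for a single layer; the learning rate, layer sizes, and toy data are illustrative assumptions, and the exact Bio-algorithms evaluated in the paper may differ.

```python
import numpy as np

# Minimal sketch of a Hebbian-style update (Oja's rule) for one layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))             # toy inputs (e.g. flattened images)
W = rng.normal(scale=0.01, size=(784, 64))   # input -> hidden weights
lr = 1e-3

for epoch in range(5):                       # few epochs, as Hebbian methods often need
    for x in X:
        y = x @ W                            # hidden activations
        # Oja's rule: Hebbian term minus a decay that keeps the weights bounded
        W += lr * (np.outer(x, y) - W * (y * y))
```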
Scene text images have different shapes and are subject to various distortions, e.g. perspective distortions. To handle these challenges, state-of-the-art methods rely on a rectification network, which is connected to the text recognition network. They form a linear pipeline that applies text rectification to all input images, even those that can be recognized without it. Undoubtedly, the rectification network improves overall text recognition performance. However, in some cases the rectification network generates unnecessary distortions, resulting in incorrect predictions for images that would otherwise have been recognized correctly. To alleviate these unnecessary distortions, the portmanteauing of features is proposed. The portmanteau feature, inspired by the portmanteau word, is a feature containing information from both the original text image and the rectified image. To generate the portmanteau feature, a non-linear input pipeline with a block matrix initialization is presented. In this work, the transformer is chosen as the recognition network due to its utilization of attention and inherent parallelism, which can effectively handle the portmanteau feature. The proposed method is examined on 6 benchmarks and compared with 13 state-of-the-art methods. The experimental results show that the proposed method outperforms the state-of-the-art methods on several of the benchmarks.
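A hedged sketch of how such a portmanteau feature could be formed: features from the original and the rectified image are concatenated and mixed by a linear layer whose weight is initialized as a block matrix. The specific block choice (an averaging [I | I]/2 initialization) and the feature dimensions are assumptions for illustration, not the paper's specification.

```python
import torch
import torch.nn as nn

class PortmanteauFusion(nn.Module):
    """Mix original-image and rectified-image features with a block-initialized projection."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim, bias=False)
        with torch.no_grad():
            eye = torch.eye(dim)
            # Block matrix [I | I] / 2: the fused feature starts as the average of both streams.
            self.proj.weight.copy_(0.5 * torch.cat([eye, eye], dim=1))

    def forward(self, feat_orig, feat_rect):
        # feat_*: (batch, seq_len, dim) token features from the two input branches
        return self.proj(torch.cat([feat_orig, feat_rect], dim=-1))

fusion = PortmanteauFusion(dim=256)
tokens = fusion(torch.randn(2, 64, 256), torch.randn(2, 64, 256))  # -> (2, 64, 256)
```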
Fine-grained action recognition is a challenging task in computer vision. Since fine-grained datasets have small inter-class variations in both the spatial and temporal domains, fine-grained action recognition models require good temporal reasoning and discrimination of attribute action semantics. Leveraging the ability of CNNs to capture high-level spatio-temporal feature representations and the modeling efficiency of transformers in capturing latent semantics and global dependencies, we investigate two frameworks that combine a CNN vision backbone with a transformer encoder to enhance fine-grained action recognition: 1) a vision-based encoder to learn latent temporal semantics, and 2) a multi-modal video-text cross-encoder to exploit additional text input and learn cross-associations between visual and text semantics. Our experimental results show that our transformer encoder frameworks effectively learn latent temporal semantics and cross-modality associations, and improve recognition performance over the CNN vision model. We achieve new state-of-the-art performance on the FineGym benchmark dataset with both proposed architectures.
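A minimal sketch of the first framework described above, in which a CNN vision backbone extracts per-frame features and a transformer encoder performs temporal reasoning; the backbone choice (ResNet-18), feature width, number of layers, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNTransformer(nn.Module):
    """Per-frame CNN features followed by a transformer encoder over time."""
    def __init__(self, num_classes: int, d_model: int = 512, num_layers: int = 4):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the classifier
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, video):                                  # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1)).flatten(1)  # (B*T, 512) frame features
        feats = feats.view(b, t, -1)                           # (B, T, 512)
        feats = self.encoder(feats)                            # temporal reasoning
        return self.head(feats.mean(dim=1))                    # clip-level logits

logits = CNNTransformer(num_classes=99)(torch.randn(2, 8, 3, 112, 112))
```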
In this report, we present our approach for the EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge 2022. We first parse sentences into semantic roles corresponding to verbs and nouns; then self-attention is utilized to exploit semantic-role-contextualized video features, together with textual features, via triplet losses in multiple embedding spaces. Our method surpasses the strong baseline in normalized Discounted Cumulative Gain (nDCG), which is more valuable for semantic similarity. Our submission is ranked 3rd in nDCG and 4th in mAP.
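A minimal sketch of a bidirectional triplet (margin ranking) loss in one embedding space; the method applies such losses in multiple role-specific embedding spaces, and the margin and negative-handling strategy here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(video_emb, text_emb, margin: float = 0.2):
    """Bidirectional margin ranking loss over in-batch negatives."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    sim = v @ t.T                                    # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                    # matched video-text pairs
    off_diag = 1.0 - torch.eye(sim.size(0), device=sim.device)
    # Penalize any negative that comes within `margin` of its positive, in both directions.
    loss_v2t = (F.relu(margin + sim - pos) * off_diag).mean()
    loss_t2v = (F.relu(margin + sim.T - pos) * off_diag).mean()
    return loss_v2t + loss_t2v

loss = triplet_loss(torch.randn(8, 256), torch.randn(8, 256))
```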
With the emergence of social media, large numbers of video clips are uploaded every day, and retrieving the most relevant visual content with a language query becomes critical. Most approaches aim at learning a joint embedding space for plain text and visual content without adequately exploiting their intra-modality structures and inter-modality correlations. This paper proposes a novel transformer that explicitly disentangles text and video into the semantic roles of objects, spatial context and temporal context, with an attention scheme to learn the intra- and inter-role correlations among the three roles and discover discriminative features for matching at different levels. Preliminary results on the popular YouCook2 benchmark indicate that our approach surpasses a current state-of-the-art method by a clear margin on all metrics. It also surpasses two SOTA methods on two of the metrics.
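A hedged sketch of attention over role features: the three role tokens (object, spatial context, temporal context) attend to one another so that each role embedding is refined by the other two. Using a single standard self-attention layer for this is an illustrative simplification of the paper's attention scheme.

```python
import torch
import torch.nn as nn

roles = torch.randn(4, 3, 256)                      # (batch, roles, dim): object, spatial, temporal
role_attn = nn.MultiheadAttention(256, num_heads=4, batch_first=True)
refined, weights = role_attn(roles, roles, roles)   # inter-role correlations refine each role token
```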
An ocean of video clips is uploaded every day as social channels grow in popularity; hence, retrieving the most relevant video content with a user text query plays an ever more important role. Most methods consider only a single joint embedding space without accounting for the local structure of each modality. Some other approaches consider multiple embedding spaces consisting of global and local features separately, ignoring the rich inter-modality correlations. We propose a novel mixture-of-experts transformer, RoME, that disentangles text and video into three levels: the roles of spatial context, temporal context, and object context. We utilize a transformer-based attention mechanism with expert modules to fully exploit visual and text embeddings at both the global and local levels, accounting for inter-modality and structural correlations. The results indicate that our method outperforms the state-of-the-art methods on the YouCook2 and MSR-VTT datasets, given the same visual backbone and without pre-training. Finally, we conduct extensive ablation studies to elucidate our design choices.
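A hedged sketch of role-wise matching: video and text are each encoded into three role embeddings, and the retrieval score averages the per-role cosine similarities. The equal-weight fusion is an illustrative assumption rather than the paper's exact expert design.

```python
import torch
import torch.nn.functional as F

def role_similarity(video_roles, text_roles):
    """Average cosine similarity over role-specific embedding spaces."""
    # video_roles, text_roles: (num_roles=3, batch, dim)
    v = F.normalize(video_roles, dim=-1)
    t = F.normalize(text_roles, dim=-1)
    per_role = torch.einsum('rbd,rkd->rbk', v, t)   # (3, B_video, B_text) per-role similarities
    return per_role.mean(dim=0)                     # fused similarity matrix for retrieval

scores = role_similarity(torch.randn(3, 4, 256), torch.randn(3, 4, 256))
```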
This paper focuses on the problem of image retrieval with attribute manipulation. Our proposed work is able to manipulate the desired attribute of a query image while maintaining its other attributes. For example, the collar attribute of a query image can be changed from round to V-neck in order to retrieve similar images from a large dataset. A key challenge in e-commerce is that images have multiple attributes that users would like to manipulate, and it is important to estimate discriminative feature representations for each of these attributes. The proposed FashionSearchNet-v2 architecture is able to learn attribute-specific representations by leveraging its weakly-supervised localization module, which ignores the irrelevant features of attributes in the feature space, thus improving similarity learning. The network is jointly trained with a combination of attribute classification and triplet ranking losses to estimate local representations. These local representations are then merged into a single global representation based on the instructed attribute manipulation, from which the desired images can be retrieved with a distance metric. The method also provides interpretability, helping to give additional information on the network's attention. Experiments performed on several datasets that are rich in terms of the number of attributes show that FashionSearchNet-v2 outperforms other state-of-the-art attribute manipulation techniques. Different from our earlier work (FashionSearchNet), we propose several improvements to the learning procedure and show that the proposed FashionSearchNet-v2 can be generalized to different domains beyond fashion.
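An illustrative sketch of the retrieval step with attribute manipulation: the local representation of the manipulated attribute is swapped for a prototype of the target value (e.g. V-neck), the locals are merged into a global descriptor, and gallery images are ranked by distance. The prototype lookup and the simple concatenation-based merging are assumptions for illustration, not FashionSearchNet-v2's exact procedure.

```python
import torch

def manipulate_and_retrieve(query_locals, attr_idx, target_prototype, gallery_globals):
    """Swap one attribute-specific representation, merge, and rank gallery images by distance."""
    # query_locals: (num_attrs, dim); gallery_globals: (N, num_attrs * dim)
    locals_edited = query_locals.clone()
    locals_edited[attr_idx] = target_prototype     # replace the manipulated attribute's representation
    query_global = locals_edited.flatten()         # merge local representations into one descriptor
    dists = torch.cdist(query_global[None], gallery_globals)[0]
    return dists.argsort()                         # indices of nearest gallery images

ranking = manipulate_and_retrieve(
    torch.randn(8, 64), attr_idx=2, target_prototype=torch.randn(64),
    gallery_globals=torch.randn(1000, 8 * 64))
```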
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across frames in videos, inspired by standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among the methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
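A small sketch showing why the group-plus-pointwise design is more compact than a dense convolution; the channel counts and group number below are illustrative assumptions rather than FFNeRV's actual configuration.

```python
import torch
import torch.nn as nn

# Compact block: grouped 3x3 spatial convolution followed by a 1x1 pointwise channel mixing.
compact_block = nn.Sequential(
    nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=8),
    nn.Conv2d(128, 256, kernel_size=1),
)
# Dense baseline: a single full 3x3 convolution with the same input/output channels.
dense_block = nn.Conv2d(128, 256, kernel_size=3, padding=1)

n_compact = sum(p.numel() for p in compact_block.parameters())
n_dense = sum(p.numel() for p in dense_block.parameters())
print(n_compact, n_dense)                             # the grouped + pointwise pair is several times smaller
out = compact_block(torch.randn(1, 128, 32, 32))      # -> (1, 256, 32, 32)
```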
Neural radiance fields (NeRF) have demonstrated the potential of coordinate-based neural representations (neural fields or implicit neural representations) in neural rendering. However, using a multi-layer perceptron (MLP) to represent a 3D scene or object requires enormous computational resources and time. There have been recent studies on how to reduce these computational inefficiencies by using additional data structures, such as grids or trees. Despite the promising performance, the explicit data structure necessitates a substantial amount of memory. In this work, we present a method to reduce the size without compromising the advantages of having additional data structures. In detail, we propose using the wavelet transform on grid-based neural fields. Grid-based neural fields provide fast convergence, while the wavelet transform, whose efficiency has been demonstrated in high-performance standard codecs, improves the parameter efficiency of grids. Furthermore, in order to achieve a higher sparsity of grid coefficients while maintaining reconstruction quality, we present a novel trainable masking approach. Experimental results demonstrate that non-spatial grid coefficients, such as wavelet coefficients, are capable of attaining a higher level of sparsity than spatial grid coefficients, resulting in a more compact representation. With our proposed mask and compression pipeline, we achieved state-of-the-art performance within a memory budget of 2 MB. Our code is available at https://github.com/daniel03c1/masked_wavelet_nerf.
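A hedged sketch of a trainable mask over grid (e.g. wavelet) coefficients: a learnable score per coefficient is binarized with a straight-through estimator, and a penalty on the soft mask encourages sparsity. The paper's exact masking formulation and sparsity loss may differ; this is an assumed formulation with illustrative grid shapes.

```python
import torch
import torch.nn as nn

class MaskedGrid(nn.Module):
    """Grid coefficients gated by a trainable binary mask (straight-through estimator)."""
    def __init__(self, shape=(64, 64, 16)):
        super().__init__()
        self.coeffs = nn.Parameter(torch.randn(*shape) * 0.1)
        self.mask_score = nn.Parameter(torch.zeros(*shape))

    def forward(self):
        soft = torch.sigmoid(self.mask_score)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()   # hard mask forward, soft gradient backward
        sparsity_loss = soft.mean()          # pushes scores toward zero, i.e. more masked coefficients
        return self.coeffs * mask, sparsity_loss

grid = MaskedGrid()
masked_coeffs, sparsity = grid()
```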
Video anomaly detection (VAD) -- commonly formulated as a multiple-instance learning problem in a weakly-supervised manner due to its labor-intensive nature -- is a challenging problem in video surveillance, where anomalous frames need to be localized in an untrimmed video. In this paper, we first propose to utilize ViT-encoded visual features from CLIP, in contrast with the conventional C3D or I3D features in the domain, to efficiently extract discriminative representations in the novel technique. We then model long- and short-range temporal dependencies and nominate the snippets of interest by leveraging our proposed Temporal Self-Attention (TSA). The ablation study conducted on each component confirms its effectiveness, and extensive experiments show that our proposed CLIP-TSA outperforms the existing state-of-the-art (SOTA) methods by a large margin on two commonly-used benchmark datasets for the VAD problem (UCF-Crime and ShanghaiTech Campus). The source code will be made publicly available upon acceptance.
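An illustrative sketch of the weakly-supervised multiple-instance setup: snippet-level CLIP features pass through a temporal self-attention layer, snippet anomaly scores are predicted, and the video-level label supervises the mean of the top-k scores. The feature dimension, k, and the single-layer attention are assumptions, not the exact CLIP-TSA design.

```python
import torch
import torch.nn as nn

class SnippetScorer(nn.Module):
    """Temporal self-attention over snippet features with a top-k multiple-instance loss."""
    def __init__(self, dim: int = 512, k: int = 3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.score = nn.Linear(dim, 1)
        self.k = k

    def forward(self, clip_feats, video_label):                  # clip_feats: (B, T, dim)
        ctx, _ = self.attn(clip_feats, clip_feats, clip_feats)   # temporal self-attention
        scores = torch.sigmoid(self.score(ctx)).squeeze(-1)      # (B, T) snippet anomaly scores
        topk = scores.topk(self.k, dim=1).values.mean(dim=1)     # video-level anomaly estimate
        loss = nn.functional.binary_cross_entropy(topk, video_label.float())
        return scores, loss

scores, loss = SnippetScorer()(torch.randn(2, 32, 512), torch.tensor([1, 0]))
```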